Robust Bipedal Locomotion


Template Model Inspired Task Space Learning for Robust Bipedal Locomotion

Castillo, Guillermo A., Weng, Bowen, Yang, Shunpeng, Zhang, Wei, Hereid, Ayonga

arXiv.org Artificial Intelligence

This work presents a hierarchical framework for bipedal locomotion that combines a Reinforcement Learning (RL)-based high-level (HL) planner policy for the online generation of task space commands with a model-based low-level (LL) controller that tracks the desired task space trajectories. Unlike traditional end-to-end learning approaches, our HL policy takes insights from the angular momentum-based linear inverted pendulum (ALIP) to carefully design the observation and action spaces of the Markov Decision Process (MDP). This simple yet effective design creates an insightful mapping between a low-dimensional state that effectively captures the complex dynamics of bipedal locomotion and a set of task space outputs that shape the walking gait of the robot. The HL policy is agnostic to the task space LL controller, which increases the flexibility of the design and the generalization of the framework to other bipedal robots. This hierarchical design results in a learning-based framework with improved performance, data efficiency, and robustness compared with the ALIP model-based approach and state-of-the-art learning-based frameworks for bipedal locomotion. The proposed hierarchical controller is tested on three different robots: Rabbit, a five-link underactuated planar biped; Walker2D, a seven-link fully actuated planar biped; and Digit, a 3D humanoid robot with 20 actuated joints. The trained policy naturally learns human-like locomotion behaviors and is able to effectively track a wide range of walking speeds while preserving the robustness and stability of the walking gait even under adversarial conditions.
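The ALIP template mentioned in the abstract reduces the biped to its center-of-mass position relative to the stance contact point and the angular momentum about that point, which evolve linearly during a stance phase. The following is a minimal sketch of those template dynamics, not the paper's controller; the mass, CoM height, and time step are illustrative values.

```python
import numpy as np

def alip_step(x, L, m=32.0, H=0.9, g=9.81, dt=0.001):
    """One Euler integration step of the planar ALIP template dynamics:
       x_dot = L / (m * H)   (CoM position driven by angular momentum)
       L_dot = m * g * x     (gravity torque about the contact point)
    x: CoM position relative to the stance contact point [m]
    L: angular momentum about the contact point [kg m^2/s]"""
    x_next = x + dt * L / (m * H)
    L_next = L + dt * m * g * x
    return x_next, L_next

# Simulate a 0.4 s stance phase from a small forward offset; the state
# diverges, reflecting the instability that foot placement must correct.
x, L = 0.01, 0.0
for _ in range(400):
    x, L = alip_step(x, L)
```

This low-dimensional (x, L) pair is the kind of compact state the abstract describes as capturing the complex walking dynamics for the HL policy.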

  robust bipedal locomotion, task space learning, template model
arXiv:2309.15442

Learning Linear Policies for Robust Bipedal Locomotion on Terrains with Varying Slopes

Krishna, Lokesh, Mishra, Utkarsh A., Castillo, Guillermo A., Hereid, Ayonga, Kolathaya, Shishir

arXiv.org Artificial Intelligence

Abstract -- In this paper, with a view toward the deployment of lightweight control frameworks for bipedal walking robots, we realize end-foot trajectories that are shaped by a single linear feedback policy. We learn this policy via a model-free and gradient-free learning algorithm, Augmented Random Search (ARS), on the two robot platforms Rabbit and Digit. Toward the end, we also provide preliminary results of hardware transfer to Digit. Locomotion for legged robots has been an active field of research for the past decade owing to the rapid progress in actuator and sensing modules. Actuators like BLDC motors have become efficient and powerful, and sensors like IMUs have become more accurate and affordable.

Figure 1: Rabbit (top) and Digit (bottom) in simulation, traversing an incline and a decline with robustness.
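ARS, as used in this paper, searches directly over the parameters of a linear policy by probing random perturbation directions and stepping along the reward-weighted average. The sketch below illustrates the update on a toy double-integrator task standing in for the biped simulation; the plant, reward, and hyperparameters are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

def rollout(theta, n_steps=50):
    """Total reward of the linear policy u = theta @ s on a toy 1-D
    stabilization task (placeholder for the biped simulator)."""
    s = np.array([1.0, 0.0])  # position, velocity
    total = 0.0
    for _ in range(n_steps):
        u = float(theta @ s)
        s = s + 0.1 * np.array([s[1], u])  # double-integrator plant
        total += -(s[0] ** 2 + 0.1 * u ** 2)
    return total

def ars_update(theta, step=0.02, noise=0.05, n_dirs=8):
    """One ARS update: evaluate +/- perturbations of theta along random
    directions, then step along the reward-difference-weighted average,
    normalized by the standard deviation of the collected rewards."""
    deltas = rng.standard_normal((n_dirs, theta.size))
    pairs = [(rollout(theta + noise * d), rollout(theta - noise * d))
             for d in deltas]
    sigma = np.std([r for pair in pairs for r in pair]) + 1e-8
    grad = sum((rp - rm) * d for (rp, rm), d in zip(pairs, deltas))
    return theta + step / (n_dirs * sigma) * grad

theta = np.zeros(2)  # a single linear feedback policy, as in the paper
for _ in range(30):
    theta = ars_update(theta)
```

Because the policy is a single linear map, the learned controller is lightweight enough for the kind of on-hardware deployment the abstract targets.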